Efficient Support for Fine-Grain Parallelism on Shared-Memory Machines

Authors

  • Vincent W. Freeh
  • David K. Lowenthal
  • Gregory R. Andrews
Abstract

A coarse-grain parallel program typically has one thread (task) per processor, whereas a fine-grain program has one thread for each independent unit of work. Although there are several advantages to fine-grain parallelism, conventional wisdom is that coarse-grain parallelism is more efficient. This paper illustrates the advantages of fine-grain parallelism and presents an efficient implementation for shared-memory machines. The approach has been implemented in a portable software package called Filaments, which employs a unique combination of techniques to achieve efficiency. The performance of the fine-grain programs discussed in this paper is always within 13% of a hand-coded coarse-grain program and is usually within 5%.
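
As a rough illustration of the distinction drawn in the abstract, the sketch below (not taken from the paper, and not the Filaments API) writes the same array-scaling computation both ways using POSIX threads: the coarse-grain version creates one thread per processor and hands each a block of iterations, while the fine-grain version creates one thread per array element. All names (N, NPROCS, scale_block, scale_one) are hypothetical.

```c
/*
 * A minimal sketch, not taken from the paper and not the Filaments API,
 * contrasting the two decompositions described in the abstract using
 * POSIX threads.  All names (N, NPROCS, scale_block, scale_one) are
 * hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define N      256            /* independent units of work    */
#define NPROCS 4              /* assumed number of processors */

static double a[N];

/* Coarse grain: one thread per processor, each owning a block of work. */
typedef struct { int lo, hi; } block_t;

static void *scale_block(void *arg)
{
    block_t *b = (block_t *)arg;
    for (int i = b->lo; i < b->hi; i++)
        a[i] *= 2.0;
    return NULL;
}

/* Fine grain: one thread per independent unit of work (array element). */
static void *scale_one(void *arg)
{
    int i = *(int *)arg;
    a[i] *= 2.0;
    return NULL;
}

int main(void)
{
    pthread_t tid[N];
    int idx[N];

    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* Coarse-grain version: NPROCS threads, N/NPROCS elements each. */
    block_t blk[NPROCS];
    for (int p = 0; p < NPROCS; p++) {
        blk[p].lo = p * (N / NPROCS);
        blk[p].hi = blk[p].lo + N / NPROCS;
        pthread_create(&tid[p], NULL, scale_block, &blk[p]);
    }
    for (int p = 0; p < NPROCS; p++)
        pthread_join(tid[p], NULL);

    /* Fine-grain version: N threads, one per element.  With raw pthreads
     * this pays thread-creation and scheduling cost per element, which is
     * the overhead a package such as Filaments is designed to avoid. */
    for (int i = 0; i < N; i++) {
        idx[i] = i;
        pthread_create(&tid[i], NULL, scale_one, &idx[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);

    printf("a[1] = %f\n", a[1]);   /* 1.0 scaled twice -> 4.0 */
    return 0;
}
```

Run naively on raw threads, the fine-grain version pays thread-creation and scheduling costs for every element; the point of the paper is that with the right runtime support this decomposition need not be expensive.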

Similar Articles

Hardware Support for Data Dependence Speculation in Distributed Shared-Memory Multiprocessors Via Cache-block Reconciliation

Data dependence speculation allows a compiler to relax the constraint of data-independence to issue tasks in parallel, increasing the potential for automatic extraction of parallelism from sequential programs. This paper proposes hardware mechanisms to support a data-dependence speculative distributed shared-memory (DDSM) architecture that enable speculative parallelization of programs with irr...

Near Fine Grain Parallel Processing Using Static Scheduling on Single Chip Multiprocessors

As the number of transistors integrated on a chip increases, efficient use of those transistors and scalable improvement of a processor's effective performance are becoming important problems. However, it has been thought that popular superscalar and VLIW approaches would have difficulty obtaining scalable improvements in effective performance in the future because of the limits of instruction-level para...

Filaments: Efficient Support for Fine-Grain Parallelism

It has long been thought that coarse-grain parallelism is much more efficient than fine-grain parallelism due to the overhead of process (thread) creation, context switching, and synchronization. On the other hand, there are several advantages to fine-grain parallelism: architecture independence, ease of programming, ease of use as a target for code generation, and load-balancing potential. Thi...
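
One common way to keep a fine-grain decomposition while avoiding the per-task overhead this abstract mentions is to multiplex the many small units of work onto a fixed pool of worker threads, one per processor. The sketch below illustrates that general technique with a shared atomic counter; it is only an assumption-laden illustration and does not reproduce the Filaments implementation or API. All names (N, NPROCS, run_unit, worker) are hypothetical.

```c
/*
 * A hedged sketch of the general work-pool technique, not the Filaments
 * implementation or API: a fixed set of worker threads (one per assumed
 * processor) pulls independent fine-grain units of work from a shared
 * atomic counter.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N      256             /* fine-grain units of work          */
#define NPROCS 4               /* worker threads, one per processor */

static double a[N];
static atomic_int next_unit;    /* index of the next unit to execute */

static void run_unit(int i)     /* one independent unit of work */
{
    a[i] *= 2.0;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        /* Claiming a unit is one atomic add: far cheaper than creating
         * a thread or context switching for every unit. */
        int i = atomic_fetch_add(&next_unit, 1);
        if (i >= N)
            break;
        run_unit(i);            /* run to completion, then fetch the next */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NPROCS];

    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    for (int p = 0; p < NPROCS; p++)
        pthread_create(&tid[p], NULL, worker, NULL);
    for (int p = 0; p < NPROCS; p++)
        pthread_join(tid[p], NULL);

    printf("a[1] = %f\n", a[1]);   /* 1.0 scaled once -> 2.0 */
    return 0;
}
```

Because workers that finish their units early simply claim more, the load balances dynamically across processors, which is one of the fine-grain advantages listed above; Filaments itself goes well beyond this toy counter, employing the combination of techniques the paper describes.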


Journal:

Volume   Issue

Pages  -

Publication date: 1996